Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy
Abstract
The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6-, and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' ability to match audio-visual native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native-language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, thereby providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech, indicating that temporal synchrony cues facilitate the intersensory perception of non-native fluent speech. Intriguingly, even though the audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. The results are discussed with regard to multisensory perceptual narrowing during the first year of life.
Similar References
The influence of infant-directed speech on 12-month-olds' intersensory perception of fluent speech.
The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio-visual fluent speech in 12-month-old infants. German-learning infants' audio-visual matching ability of German and French fluent speech was assessed by using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, ...
Cross-modal binding and activated attentional networks during audio-visual speech integration: a functional MRI study.
We evaluated the neural substrates of cross-modal binding and divided attention during audio-visual speech integration using functional magnetic resonance imaging. The subjects (n = 17) were exposed to phonemically concordant or discordant auditory and visual speech stimuli. Three different matching tasks were performed: auditory-auditory (AA), visual-visual (VV) and auditory-visual (AV). Subje...
Two-Level Bimodal Association for Audio-Visual Speech Recognition
This paper proposes a new method for bimodal information fusion in audio-visual speech recognition, where cross-modal association is considered in two levels. First, the acoustic and the visual data streams are combined at the feature level by using the canonical correlation analysis, which deals with the problems of audio-visual synchronization and utilizing the cross-modal correlation. Second...
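As an aside, the feature-level fusion step this reference describes can be sketched in a few lines of Python using scikit-learn's CCA. The stand-in random features, their dimensions, and the final concatenation strategy below are illustrative assumptions, not the paper's actual pipeline.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_frames = 500
# Stand-in per-frame features; real systems would use e.g. acoustic
# coefficients and lip-region descriptors (dimensions assumed here).
audio_feats = rng.standard_normal((n_frames, 39))
visual_feats = rng.standard_normal((n_frames, 20))

# Learn projections that maximize the correlation between the two streams.
cca = CCA(n_components=10)
cca.fit(audio_feats, visual_feats)
audio_c, visual_c = cca.transform(audio_feats, visual_feats)

# Feature-level fusion: concatenate the correlated projections per frame
# to form a single observation stream for a downstream recognizer.
fused = np.concatenate([audio_c, visual_c], axis=1)
print(fused.shape)  # (500, 20)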
Message vs. messenger effects on cross-modal matching for spoken phrases
A core issue in speech perception and word recognition research is the nature of information perceivers use to identify spoken utterances across indexical variations in their phonetic details, such as talker and accent differences. Separately, a crucial question in audio-visual research is the nature of information perceivers use to recognize phonetic congruency between the audio and visual (ta...
Lexical mediation between sight and sound in speechreading.
In two experiments, we investigated whether simultaneous speech reading can influence the detection of speech in envelope-matched noise. Subjects attempted to detect the presence of a disyllabic utterance in noise while watching a speaker articulate a matching or a non-matching utterance. Speech detection was not facilitated by an audio-visual match, which suggests that listeners relied on low-...
Journal:
Volume 9, Issue: -
Pages: -
Published: 2014